Module 2 - Processes and Threads

The document discusses processes and inter-process communication. It covers process concepts like process states and the process control block. It also discusses process scheduling and context switching. Methods of inter-process communication include shared memory and message passing. Shared memory IPC uses synchronization mechanisms while message passing uses send and receive operations to exchange fixed or variable sized messages between processes.


Saudi Electronic University

26/12/2021
College of Computing and Informatics
OPERATING SYSTEMS
Operating Systems
Module 2

Chapter 3: Processes
Chapter 4: Threads & Concurrency
CONTENTS
 Process Concept

 Process Scheduling

 Operations on Processes

 Inter-Process Communication

 IPC in Shared-Memory Systems

 IPC in Message-Passing Systems

 Multicore Programming

 Multithreading Models

 Threading Issues
WEEKLY LEARNING OUTCOMES
 Identify the separate components of a process and illustrate how they are represented and scheduled in an operating system.

 Describe how processes are created and terminated in an operating system, including developing programs using the appropriate system calls that perform these operations.

 Compare and contrast inter-process communication using shared memory and message passing.
WEEKLY LEARNING OUTCOMES
 Identify the basic components of a thread, and contrast threads and processes.

 Describe the benefits and challenges of designing multithreaded applications.
PROCESS CONCEPTS
Process: a program in execution; process execution must progress in a sequential fashion. There is no parallel execution of the instructions of a single process.
An operating system executes a variety of programs that run as a
process.
Multiple parts
Text section—the executable code
Data section—global variables
Heap section—memory that is dynamically allocated during program
run time
Stack section—temporary data storage when invoking functions (such
as function parameters, return addresses, and local variables)
LAYOUT OF PROCESS IN MEMORY
PROCESS STATES
As a process executes, it changes state. The state of a process is defined in part by
the current activity of that process.
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an
I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
PROCESS STATES
PROCESS CONTROL BLOCK
Each process is represented in the operating system by a process control block
(PCB)—also called a task control block.
It contains many pieces of information associated with a specific process,
including these:
• Process state. The state may be new, ready, running, waiting, halted,
and so on.
• Program counter. The counter indicates the address of the next instruction
to be executed for this process.
• CPU registers. The registers vary in number and type, depending on
the computer architecture.
PROCESS CONTROL BLOCK
• CPU-scheduling information. This information includes a process
priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include
such items as the value of the base and limit registers and the page tables, or
the segment tables.
• Accounting information. This information includes the amount of
CPU and real time used, time limits, account numbers, job or process
numbers, and so on.
• I/O status information. This information includes the list of I/O
devices allocated to the process, a list of open files, and so on.
PROCESS SCHEDULING
Process scheduler selects among available processes for next execution on
CPU core.
Goal: Maximize CPU use, quickly switch processes onto CPU core
Maintains scheduling queues of processes
Ready queue – set of all processes residing in main memory, ready and
waiting to execute
Wait queues – set of processes waiting for an event (i.e., I/O)
Processes migrate among the various queues
PROCESS SCHEDULING

Queueing-diagram representation of process scheduling


CONTEXT SWITCH
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch
Context of a process represented in the PCB
Context-switch time is pure overhead; the system does no useful work while
switching
The more complex the OS and the PCB, the longer the context switch
Time dependent on hardware support
Some hardware provides multiple sets of registers per CPU, allowing multiple contexts to be loaded at once
CONTEXT SWITCH
OPERATIONS ON PROCESS
System must provide mechanisms for:
Process creation
The creating process is called a parent process, and the new
processes are called the children of that process.
When a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same
program and data as the parent).
2. The child process has a new program loaded into it.
OPERATIONS ON PROCESS
Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it.
A parent may terminate the execution of one of its children for a variety of
reasons, such as these:
• The child has exceeded its usage of some of the resources that it has been
allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue
if its parent terminates.
INTERPROCESS COMMUNICATION
Processes executing concurrently in the operating system may be either independent processes or cooperating processes.

 A process is independent if it does not share data with any other processes executing in the system.

 A process is cooperating if it can affect or be affected by the other processes executing in the system.
INTERPROCESS COMMUNICATION
There are several reasons for providing an environment that allows process
cooperation:
• Information sharing. Since several applications may be interested in the same
piece of information (for instance, copying and pasting), we must provide an
environment to allow concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it
into subtasks, each of which will be executing in parallel with the others.
• Modularity. We may want to construct the system in a modular fashion, dividing
the system functions into separate processes or threads.
INTERPROCESS COMMUNICATION
Two models of IPC

Shared memory

Message passing
INTERPROCESS COMMUNICATION
(a) Shared memory. (b) Message passing.
IPC IN SHARED-MEMORY SYSTEMS
An area of memory is shared among the processes that wish to communicate

 The communication is under the control of the user processes, not the operating system.

 The major issue is to provide a mechanism that allows the user processes to synchronize their actions when they access shared memory.
IPC IN SHARED-MEMORY SYSTEMS
Bounded-Buffer – Shared-Memory Solution

#define BUFFER_SIZE 10
typedef struct {
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Solution is correct, but can only use BUFFER_SIZE-1 elements


PRODUCER PROCESS – SHARED MEMORY
item next_produced;

while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
CONSUMER PROCESS – SHARED MEMORY
item next_consumed;

while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;

    /* consume the item in next_consumed */
}
IPC IN MESSAGE PASSING SYSTEMS
Processes communicate with each other without resorting to shared variables

 IPC facility provides two operations:
   send(message)
   receive(message)

 The message size is either fixed or variable
IPC IN MESSAGE PASSING SYSTEMS
If processes P and Q wish to communicate, they need to:
Establish a communication link between them
Exchange messages via send/receive
Implementation issues:
How are links established?
Can a link be associated with more than two processes?
How many links can there be between every pair of communicating processes?
What is the capacity of a link?
Is the size of a message that the link can accommodate fixed or variable?
Is a link unidirectional or bi-directional?
IMPLEMENTATION OF COMMUNICATION LINK
Physical:
Shared memory
Hardware bus
Network
Logical:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
DIRECT COMMUNICATION
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional.
INDIRECT COMMUNICATION
Messages are sent to and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
Link may be unidirectional or bi-directional.
SINGLE AND MULTITHREADED PROCESS
BENEFITS
 Responsiveness – may allow continued execution if part of a process is blocked, especially important for user interfaces

 Resource Sharing – threads share the resources of their process, easier than shared memory or message passing

 Economy – cheaper than process creation; thread switching has lower overhead than context switching

 Scalability – process can take advantage of multicore architectures
MULTICORE PROGRAMMING
Multicore or multiprocessor systems are putting pressure on programmers; challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Parallelism implies a system can perform more than one task
simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
MULTICORE PROGRAMMING
 Concurrent execution on single-core system:

 Parallelism on a multi-core system:


MULTICORE PROGRAMMING

Types of parallelism
Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
Task parallelism – distributing threads across cores, each thread
performing unique operation
DATA AND TASK PARALLELISM
USER THREADS AND KERNEL THREADS
User threads - management done by user-level threads library
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Kernel threads - Supported by the Kernel
Examples – virtually all general-purpose operating systems, including:
Windows
Linux
Mac OS X
iOS
Android
USER THREADS AND KERNEL THREADS
MULTITHREADING MODELS
Many-to-One

One-to-One

Many-to-Many
MANY TO ONE MODEL
Many user-level threads mapped to single kernel thread
One thread blocking causes all to block
Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads
ONE TO ONE MODEL
Each user-level thread maps to kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples
Windows
Linux
MANY TO MANY MODEL
Allows many user level threads to be mapped to many kernel threads
Allows the operating system to create a sufficient number of kernel threads
Windows with the ThreadFiber package
Otherwise not very common
THREADING ISSUES
Semantics of fork() and exec() system calls

Signal handling

Synchronous and asynchronous

Thread cancellation of target thread

Asynchronous or deferred

Thread-local storage

Scheduler Activations
Main Reference
Chapter 3: Processes
Chapter 4: Threads & Concurrency
(Operating System Concepts by Silberschatz, Abraham, et al. 10th ed.,
ISBN: 978-1-119-32091-3, 2018)

Additional References
Chapter 2.1
Chapter 2.2
(Modern Operating Systems by Andrew S. Tanenbaum and Herbert Bos. 4th
ed., ISBN-10: 0-13-359162-X, ISBN-13: 978-0-13-359162-0, 2015)

This presentation is mainly dependent on the textbook: Operating System Concepts by Silberschatz, Abraham, et al.
Thank You
