Module 2 Process
Processes
Outline
• Processes
• Process Scheduling Algorithms
• Interprocess Communication
• Examples of IPC Systems
• Threads
• Multicore Programming
• Multithreading Models
• Thread Libraries
• Threading Issues
Process Concept
• An operating system executes a variety of programs that run as
processes.
• Process – a program in execution; process execution must
progress in a sequential fashion. There is no parallel execution of the
instructions of a single process
• Multiple parts
• The program code, also called text section
• Current activity including program counter, processor registers
• Stack containing temporary data
• Function parameters, return addresses, local variables
• Data section containing global variables
• Heap containing memory dynamically allocated during run time
Process Concept (Cont.)
• Program is a passive entity stored on disk (executable
file); a process is an active entity
• Program becomes process when an executable file is loaded into
memory
• Execution of program started via GUI mouse clicks,
command line entry of its name, etc.
• One program can be several processes
• Consider multiple users executing the same program
Process in Memory
Memory Layout of a C Program
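To make the layout concrete, here is a small C program annotated with the section each object lives in (a sketch; the variable names are mine):

#include <stdio.h>
#include <stdlib.h>

int initialized = 42;   /* data section: initialized global variable */
int uninitialized;      /* bss part of the data section              */

int main(int argc, char *argv[])  /* main() itself is machine code in the text section */
{
    int local = 7;                     /* stack: parameters, locals, return addresses */
    int *p = malloc(sizeof(int));      /* heap: memory allocated at run time          */
    *p = local + initialized + uninitialized;
    printf("%d\n", *p);
    free(p);
    return 0;
}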
Process State
FCFS Scheduling
• Gantt chart when the processes arrive in the order P1, P2, P3:
  | P1 | P2 | P3 |
  0    24   27   30
• Gantt chart when the processes arrive in the order P2, P3, P1:
  | P2 | P3 | P1 |
  0    3    6    30
• SJF scheduling
• Gantt chart:
  | P4 | P1 | P3 | P2 |
  0    3    9    16   24
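Reading average waiting times off these charts (assuming, as the charts imply, that all processes arrive at time 0, so each process waits exactly until its slice starts):
• FCFS, order P1, P2, P3: average waiting time = (0 + 24 + 27) / 3 = 17
• FCFS, order P2, P3, P1: average waiting time = (6 + 0 + 3) / 3 = 3; much better, since the short processes no longer wait behind the long one (the convoy effect)
• SJF: average waiting time = (3 + 16 + 9 + 0) / 4 = 7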
• Exponential averaging: τ_{n+1} = α t_n + (1 - α) τ_n, where t_n is the
measured length of the nth CPU burst and τ_{n+1} is the predicted value
for the next burst
• Commonly, α set to ½
Examples of Exponential Averaging
• α = 0
• τ_{n+1} = τ_n
• Recent history does not count
• α = 1
• τ_{n+1} = t_n
• Only the actual last CPU burst counts
• If we expand the formula, we get:
τ_{n+1} = α t_n + (1 - α) α t_{n-1} + … + (1 - α)^j α t_{n-j} + … + (1 - α)^{n+1} τ_0
• Since both α and (1 - α) are less than or equal to 1, each successive
term has less weight than its predecessor
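A minimal C sketch of this update rule (a sketch, not from the slides; the function name is mine):

/* tau: current prediction (τ_n); t: length of the burst that just
   finished (t_n); alpha: weight on recent history, 0 <= alpha <= 1.
   Returns τ_{n+1}. With alpha = 0.5 the estimate is the average of
   the last burst and the old prediction. */
double predict_next_burst(double tau, double t, double alpha)
{
    return alpha * t + (1.0 - alpha) * tau;
}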
Bounded Buffer – Shared Memory
• Shared data
#define BUFFER_SIZE 10
typedef struct {
. . .
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Producer Process – Shared Memory
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
• Solution is correct, but can use only BUFFER_SIZE-1 elements
Consumer Process – Shared Memory
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
Producer and Consumer with a Counter
• Suppose we instead track the number of filled slots in a shared
integer counter, initialized to 0
• Producer:
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
• Consumer:
while (true) {
    while (counter == 0)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition
• counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
• counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
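One interleaving makes the hazard concrete (the standard illustration; assume counter is 5 initially):
  S0: producer executes register1 = counter          {register1 = 5}
  S1: producer executes register1 = register1 + 1    {register1 = 6}
  S2: consumer executes register2 = counter          {register2 = 5}
  S3: consumer executes register2 = register2 - 1    {register2 = 4}
  S4: producer executes counter = register1          {counter = 6}
  S5: consumer executes counter = register2          {counter = 4}
• The correct result is counter = 5; whether 6 or 4 survives depends only on which of S4 and S5 runs last, which is exactly the race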
Message Passing
• Implementation of communication link
• Physical:
• Shared memory
• Hardware bus
• Network
• Logical:
• Direct or indirect
• Synchronous or asynchronous
• Automatic or explicit buffering
Direct Communication
• Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
• Properties of communication link
• Links are established automatically
• A link is associated with exactly one pair of communicating processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional
Indirect Communication
• Messages are directed to and received from mailboxes (also referred
to as ports)
• Mailbox sharing
• P1, P2, and P3 share mailbox A
• P1 sends; P2 and P3 receive
• Who gets the message?
• Solutions
• Allow a link to be associated with at most two processes
• Allow only one process at a time to execute a receive operation
• Allow the system to select arbitrarily the receiver. Sender is
notified who the receiver was.
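As a concrete mailbox example, here is a minimal sketch using POSIX message queues, one of the IPC systems named in the outline (the queue name /demo_mailbox and the message text are made up; link with -lrt on older glibc):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The mailbox: a named queue that any cooperating process can open. */
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0644, NULL);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mq, "hello", 6, 0);     /* deposit a message in the mailbox */

    char buf[8192];                 /* must be >= the queue's mq_msgsize */
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mailbox");
    return 0;
}

Because the mailbox is named rather than tied to one process, any number of senders and receivers can share it, which is exactly where the "who gets the message?" question above comes from.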
Synchronization
• Message passing may be either blocking (synchronous) or non-blocking (asynchronous)
• Producer
message next_produced;
while (true) {
/* produce an item in next_produced
*/
send(next_produced);
}
• Consumer
message next_consumed;
while (true) {
    receive(next_consumed);
    /* consume the item in next_consumed */
}
Multicore Programming
• Types of parallelism
• Data parallelism – distributes subsets of the same data across
multiple cores, same operation on each
• Task parallelism – distributing threads across cores, each
thread performing a unique operation
Data and Task Parallelism
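To make data parallelism concrete, here is a sketch that splits one array sum across two POSIX threads, each running the same operation on its own half (array contents and names are mine):

#include <pthread.h>
#include <stdio.h>

#define N 1000000
static int data[N];
static long long partial[2];

/* Each worker sums its own half of the array: same operation,
   different subset of the data -- data parallelism. */
static void *worker(void *arg)
{
    long id = (long)arg;
    long long sum = 0;
    for (int i = id * (N / 2); i < (id + 1) * (N / 2); i++)
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = 1;

    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);

    printf("total = %lld\n", partial[0] + partial[1]);
    return 0;
}

Task parallelism would instead give each thread a different operation, e.g., one thread sums while another searches for the maximum.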
Amdahl’s Law
• Identifies performance gains from adding additional cores to an application
that has both serial and parallel components
• If S is the serial portion and N the number of processing cores:
speedup ≤ 1 / (S + (1 - S)/N)
• That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores
results in a speedup of 1 / (0.25 + 0.75/2) = 1.6 times
• As N approaches infinity, speedup approaches 1 / S
• But does the law take into account contemporary multicore systems?
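The limit is easy to see numerically; a tiny C sketch (the function name is mine):

#include <stdio.h>

/* Amdahl's law: upper bound on speedup for serial fraction S on N cores. */
static double amdahl(double S, int N)
{
    return 1.0 / (S + (1.0 - S) / N);
}

int main(void)
{
    /* 25% serial: the bound climbs toward 1/S = 4 and no further. */
    for (int N = 1; N <= 1024; N *= 4)
        printf("N = %4d  speedup <= %.2f\n", N, amdahl(0.25, N));
    return 0;
}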
User Threads and Kernel Threads
• User threads – managed by a user-level threads library; kernel
threads – supported and managed directly by the operating system
Multithreading Models
• Many-to-One
• One-to-One
• Many-to-Many
Operating System Examples
• Windows Threads
• Linux Threads
Windows Threads
• Windows API – primary API for Windows applications
• Implements the one-to-one mapping, kernel-level
• Each thread contains
• A thread id
• Register set representing state of processor
• Separate user and kernel stacks for when thread runs in user mode or
kernel mode
• Private data storage area used by run-time libraries and dynamic link
libraries (DLLs)
• The register set, stacks, and private storage area are known as the
context of the thread
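A minimal sketch of one-to-one thread creation through the Windows API (error handling omitted for brevity):

#include <windows.h>
#include <stdio.h>

/* Thread entry point: receives the LPVOID argument passed to CreateThread. */
DWORD WINAPI work(LPVOID param)
{
    printf("hello from thread %lu\n", GetCurrentThreadId());
    return 0;
}

int main(void)
{
    DWORD tid;
    /* Each CreateThread call produces one kernel-level thread. */
    HANDLE h = CreateThread(NULL, 0, work, NULL, 0, &tid);
    WaitForSingleObject(h, INFINITE);   /* wait for the thread to finish */
    CloseHandle(h);
    return 0;
}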
Linux Threads
• Linux refers to them as tasks rather than threads
• Thread creation is done through clone() system call
• clone() allows a child task to share the address space of the
parent task (process)
• Flags control behavior
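A sketch of task creation through the glibc clone() wrapper; the sharing flags shown here are the usual choice for thread-like behavior, and the stack size is an arbitrary illustrative value (both are assumptions, not from the slide text):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int child(void *arg)
{
    printf("child task %d running\n", (int)getpid());
    return 0;
}

int main(void)
{
    const int STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);

    /* CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND: share the
       address space, filesystem info, open files, and signal handlers
       with the parent -- i.e., the child task behaves like a thread.
       With none of these flags set, clone() behaves much like fork(). */
    pid_t pid = clone(child, stack + STACK_SIZE,   /* stack grows down */
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      NULL);
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}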