
Module - 3

Scheduling
Content:

- Process Scheduling
- CPU Scheduling
- Pre-emptive Scheduling
- Non-Pre-emptive Scheduling
- Resource allocation and Management
- Deadlocks
- Deadlock Handling Mechanisms
Scheduling:

• Almost all programs alternate between bursts of CPU computation and waiting for some
form of I/O.

• In a simple system running a single process, the time spent waiting for I/O is wasted.

• A scheduling mechanism enables the CPU to be used by one process while another is
waiting for I/O.

• The goal is to make the overall system as "efficient" and "fair" as possible. What counts
as efficient and fair varies with often complex conditions and with changing policy
objectives.
CPU Cycle:

A CPU cycle is, generally, the time needed for one basic processor operation, such as an
addition, to be executed; this time is usually the reciprocal of the clock rate. The term was
traditionally used for the time needed to fetch and execute one basic computer
instruction (e.g. add or subtract).
CPU Scheduler:

It is the task of the CPU scheduler to pick another process from the ready
queue to run next once the CPU becomes idle.

The ready queue is not necessarily a FIFO queue; both its storage structure and
the algorithm used to pick the next process can vary.

There are many alternative algorithms to choose from, as well as various
adjustable parameters for each; they are the fundamental subject of this
module.
Pre-emptive and Non-Pre-emptive Scheduling:
CPU scheduling decisions take place in one of the following four situations:

i) Whenever a process terminates.

ii) Whenever a process switches from the running state to the waiting state, for
example when it issues an I/O request or invokes wait().

iii) Whenever a process switches from the waiting state to the ready state, say at
completion of I/O or a return from wait( ).

iv) Whenever a process switches from the running state to the ready state, for
example in response to an interrupt.

If scheduling occurs only in situations (i) and (ii), the scheme is non-pre-emptive;
otherwise it is pre-emptive.
Dispatcher:
- The dispatcher is the module that gives control of the CPU to the process selected
by the scheduler. This function involves:

• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.

- The dispatcher needs to be as fast as possible, as it is run on every context switch.


The time consumed by the dispatcher is known as dispatch latency.
Scheduling Criteria:

- There are several different criteria to consider when trying to select the
"best" scheduling algorithm for a particular situation and environment,
including:

• CPU utilization – a computer's usage of its processing resources; keep the CPU as busy as possible

• Throughput – the amount of work completed in a unit of time
• Turnaround time – the time from submission of a process until its completion
• Waiting time – the time a process spends waiting in the ready queue
• Load average – the average number of processes waiting in the ready queue
• Response time – the time from submission of a request until the first response is produced
FCFS

Process Burst Time


P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
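As an illustration (not part of the original slides), a minimal C sketch that reproduces the
calculation above: each process waits for the sum of the bursts that arrived before it.

    #include <stdio.h>

    int main(void) {
        /* Burst times for P1, P2, P3, arriving in that order at time 0 */
        int burst[] = {24, 3, 3};
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("Waiting time for P%d = %d\n", i + 1, wait);
            total_wait += wait;
            wait += burst[i];        /* the next process also waits for this burst */
        }
        printf("Average waiting time = %.2f\n", (double)total_wait / n);  /* prints 17.00 */
        return 0;
    }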
First-Come First-Serve Scheduling (FCFS), Shortest-Job-First Scheduling (SJF), and
Priority Scheduling (PS): the worked Gantt-chart examples for these algorithms were
presented as figures.

Round Robin (RR)

Process Burst Time


P1 24
P2 3
P3 3
• The Gantt chart is:
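The Gantt chart for this schedule was shown as a figure. As an illustration only, here is a
minimal C sketch that simulates round robin on the same three bursts and prints the
Gantt-chart segments. The time quantum of 4 is an assumption (the value used in the
classic textbook version of this example), not something stated on the slide.

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};       /* remaining burst time for P1..P3 */
        int n = 3, quantum = 4, t = 0, remaining = n;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (burst[i] == 0) continue;
                int run = burst[i] < quantum ? burst[i] : quantum;
                printf("t=%2d..%2d  P%d\n", t, t + run, i + 1);  /* one schedule segment */
                t += run;
                burst[i] -= run;
                if (burst[i] == 0) remaining--;
            }
        }
        return 0;
    }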
Time Quantum and Context Switch Time; further worked examples of FCFS,
Shortest-Job-First (SJF), Shortest Remaining Time First (SRTF), Round Robin (RRS),
Multilevel Queue (MQS), and Multilevel Feedback-Queue (MFQS) scheduling were
presented as figures.
Resource Allocation and Management:
Aims and Objectives:

- Some issues of resource allocation

- Allocation Mechanisms & Allocation Policies


Allocation Mechanisms:
A resource is a component of the computer system for which processes can compete. Typically these are: Central
Processors, Memory, Peripherals, Backing Store, Files.
• Resource allocation issues:

- Mutual exclusion of processes from non-shareable resources.

- Deadlock should be handled sensibly.

- A high level of resource utilization should be ensured.


Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Deadlock can arise if the following four conditions hold simultaneously
(Necessary Conditions) :

• Mutual Exclusion: one or more resources are non-shareable (only one process can
use a resource at a time).

• Hold and Wait: a process is holding at least one resource while waiting for
additional resources held by other processes.

• No Preemption: a resource cannot be taken from a process unless the process releases the
resource.

• Circular Wait: a set of processes are waiting for each other in a circular chain.
Suppose n processes, P1, …, Pn, share m identical resource units, which can be
reserved and released one at a time. The maximum resource requirement of process Pi
is Si, where Si > 0.
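The slide stops here; presumably it went on to the standard result for this setup, stated
here for completeness: such a system cannot deadlock if

    S1 + S2 + … + Sn < m + n

Sketch of the usual argument: in a deadlocked state all m units are allocated and every
process still needs at least one more unit, so each Pi can hold at most Si - 1 units;
summing gives m <= (S1 + … + Sn) - n < (m + n) - n = m, a contradiction.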
Deadlock can be handled by preventing any one of these conditions from holding, or by
detection and recovery.
A resource-allocation graph describes deadlocks more precisely.
DEADLOCK EXAMPLES
SYSTEM MODEL

• System consists of resources


• Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices

• Each resource type Ri has Wi instances.


• Each process utilizes a resource as follows:
• request
• use
• release
DEADLOCK CHARACTERIZATION
Deadlock can arise if four conditions hold simultaneously.

• Mutual exclusion: only one process at a time can use a resource


• Hold and wait: a process holding at least one resource is waiting to
acquire additional resources held by other processes
• No preemption: a resource can be released only voluntarily by the
process holding it, after that process has completed its task
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such
that P0 is waiting for a resource that is held by P1, P1 is waiting for a
resource that is held by P2, …, Pn–1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
RESOURCE-ALLOCATION GRAPH
A set of vertices V and a set of edges E.

• V is partitioned into two types:


• P = {P1, P2, …, Pn}, the set consisting of all the processes in the system

• R = {R1, R2, …, Rm}, the set consisting of all resource types in the system

• request edge – directed edge Pi → Rj

• assignment edge – directed edge Rj → Pi


RESOURCE-ALLOCATION GRAPH (CONT.)
• Process – drawn as a circle labelled Pi

• Resource type with 4 instances – drawn as a rectangle with one dot per instance, labelled Rj

• Pi requests an instance of Rj: request edge Pi → Rj

• Pi is holding an instance of Rj: assignment edge Rj → Pi
EXAMPLE OF A RESOURCE ALLOCATION GRAPH
RESOURCE ALLOCATION GRAPH WITH A DEADLOCK
GRAPH WITH A CYCLE BUT NO DEADLOCK
BASIC FACTS

• If the graph contains no cycles ⇒ no deadlock

• If the graph contains a cycle ⇒
• if only one instance per resource type, then deadlock
• if several instances per resource type, possibility of deadlock
BASIC FACTS

• If a system is in a safe state ⇒ no deadlocks

• If a system is in an unsafe state ⇒ possibility of deadlock

• Avoidance ⇒ ensure that the system will never enter an unsafe state.


AVOIDANCE ALGORITHMS

• Single instance of a resource type


• Use a resource-allocation graph

• Multiple instances of a resource type


• Use the banker’s algorithm
RESOURCE-ALLOCATION GRAPH
Avoidance by Anticipation

In deadlock avoidance, a request for a resource is granted only if the resulting
state of the system does not lead to deadlock. The state of the system is
continuously checked for safe and unsafe states.

In order to avoid deadlocks, each process must tell the OS the maximum number of
resources it may request to complete its execution.

The simplest and most useful approach requires each process to declare the
maximum number of resources of each type it may ever need. The deadlock
avoidance algorithm examines resource allocations so that a circular wait
condition can never arise.
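To make the avoidance idea concrete, here is a minimal C sketch (illustrative only, not
taken from the slides) of the safety check at the heart of the banker's algorithm; a
request is granted only if, after pretending to allocate it, this check still succeeds. The
sizes N and M and the matrix layout are hypothetical example values.

    #include <stdbool.h>

    #define N 5   /* number of processes (hypothetical example size) */
    #define M 3   /* number of resource types (hypothetical example size) */

    /* Returns true if some ordering lets every process finish (a safe state). */
    bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
        int work[M];
        bool finish[N] = { false };
        int done = 0;

        for (int j = 0; j < M; j++)
            work[j] = available[j];

        while (done < N) {
            bool progressed = false;
            for (int i = 0; i < N; i++) {
                if (finish[i])
                    continue;
                bool can_finish = true;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) { can_finish = false; break; }
                if (can_finish) {
                    for (int j = 0; j < M; j++)
                        work[j] += allocation[i][j];   /* Pi finishes and releases its resources */
                    finish[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed)
                return false;    /* no remaining process can finish: unsafe state */
        }
        return true;
    }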
Example 1 and Example 2: worked avoidance examples (presented as figures).
DEADLOCK HANDLING MECHANISMS

• Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption, and
circular wait hold simultaneously. If at least one of these four conditions can be
prevented from ever holding, deadlock can never occur in the system.

• Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe
or an unsafe state before every step it performs. Execution continues while the
system remains in a safe state; if a step would move the system to an unsafe state,
the OS backtracks that step.

• Deadlock detection and recovery
This approach lets processes fall into deadlock and then periodically checks whether
a deadlock has occurred in the system. If it has, a recovery method is applied to the
system to get rid of the deadlock.
RECOVERY FROM DEADLOCK: PROCESS TERMINATION

• Abort all deadlocked processes

• Abort one process at a time until the deadlock cycle is eliminated

• In which order should we choose to abort?


1. Priority of the process
2. How long process has computed, and how much longer to completion
3. Resources the process has used
4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?
RECOVERY FROM DEADLOCK: RESOURCE PREEMPTION

• Selecting a victim – minimize cost

• Rollback – return to some safe state, restart process for that state

• Starvation – the same process may always be picked as the victim; include the
number of rollbacks in the cost factor
References:
1. Abraham Silberschatz, Greg Gagne, and Peter Baer Galvin, "Operating System Concepts", Eighth Edition.

Module 4
Concurrency

Inter Process Communication



IPC – processes executing concurrently in the OS are either
 Independent processes
 Cooperating processes
Reasons for providing process cooperation:
 Information sharing
 Computation speedup
 Modularity
 Convenience
Interprocess Communication
 Used to exchange data and information among cooperating process
 2 Models for IPC
 Shared Memory
 A region of memory is shared among cooperating processes
 Data can be exchanged by reading and writing it in the shared region
 Faster
 System call used to create a shared memory
 Message Passing
 Small size data exchange
 Easier to implement
 Uses system calls; more kernel intervention when compared to shared memory
Communication Models

Shared Memory

 Shared memory region resides in the address space of the process creating
the shared memory
 Other processes can attach with this shared memory segment for
communication
 This concept allows one process to access the memory space of others
 Reading and writing data in the shared memory creates the exchange of
information between the processes
 Simultaneous writing on the same location should be avoided
 Example of cooperating processes: the Producer – Consumer problem
 The producer process produces items that are consumed by the consumer
process
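As a hedged illustration of the system calls involved (POSIX shared memory is assumed;
the segment name "/demo_shm" and the message are made up for the example), a
producer-style process could create and write a shared region like this; a consumer
would shm_open() and mmap() the same name to read it.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {                                  /* link with -lrt on some systems */
        const char *name = "/demo_shm";               /* hypothetical segment name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);   /* create the shared object */
        if (fd == -1) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (region == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(region, "item produced by the producer");   /* producer writes */
        /* A consumer process would shm_open()/mmap() the same name and read the data. */

        munmap(region, 4096);
        close(fd);
        shm_unlink(name);                             /* remove the object when done */
        return 0;
    }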
Shared Memory – Producer Consumer Problem

 Producer – Consumer problem can be solved by Shared Memory


 It allows Producer and Consumer run simultaneously by sharing the buffer
space where items are placed and consumed
 Buffer memory region is shared by two processes
 These two processes must be synchronized: the consumer should not try to
consume an item that the producer has not yet produced
 2 types of buffer
 Bounded buffer – fixed buffer size
 Unbounded buffer – no limit on the buffer size
Process Synchronization
 Background
 The Critical-Section Problem
 Peterson’s Solution
 Synchronization Hardware
 Mutex Locks
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
Objectives
 To present the concept of process synchronization.
 To introduce the critical-section problem, whose solutions
can be used to ensure the consistency of shared data
 To present both software and hardware solutions of the
critical-section problem
 To examine several classical process-synchronization
problems
 To explore several tools that are used to solve process
synchronization problems
Background
 Processes can execute concurrently
 May be interrupted at any time, partially completing execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes
 Illustration of the problem:
Suppose that we wanted to provide a solution to the consumer-
producer problem that fills all the buffers. We can do so by having
an integer counter that keeps track of the number of full
buffers. Initially, counter is set to 0. It is incremented by the
producer after it produces a new buffer and is decremented by
the consumer after it consumes a buffer.
Producer
while (true) {
    /* produce an item in next_produced */

    while (counter == BUFFER_SIZE)
        ;   /* do nothing */

    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer
while (true) {
    while (counter == 0)
        ;   /* do nothing */

    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;

    /* consume the item in next_consumed */
}
Race Condition
A race condition is an undesirable situation that occurs when a device or system
attempts to perform two or more operations at the same time and the result depends on
the particular order in which the operations execute.
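With the shared counter used by the producer and consumer above, the race arises
because counter++ and counter-- each compile into separate load, modify, and store
steps (the register names here are purely illustrative):

    counter++ could be implemented as          counter-- could be implemented as
        register1 = counter                        register2 = counter
        register1 = register1 + 1                  register2 = register2 - 1
        counter = register1                        counter = register2

Consider this interleaving with counter = 5:

    T0: producer executes register1 = counter          {register1 = 5}
    T1: producer executes register1 = register1 + 1    {register1 = 6}
    T2: consumer executes register2 = counter          {register2 = 5}
    T3: consumer executes register2 = register2 - 1    {register2 = 4}
    T4: producer executes counter = register1          {counter = 6}
    T5: consumer executes counter = register2          {counter = 4}

The final value is 4 (or 6 if T4 and T5 are swapped), although the correct result is 5.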
Critical Section Problem
 Consider system of n processes {p0, p1, … pn-1}
 Each process has critical section segment of code
 Process may be changing common variables, updating table,
writing file, etc
 When one process in critical section, no other may be in its critical
section
 Critical section problem is to design protocol to solve this
 Each process must ask permission to enter critical section in
entry section, may follow critical section with exit section,
then remainder section
Critical Section

 General structure of process Pi


Algorithm for Process Pi
do {
    while (turn == j)
        ;   /* do nothing */

    critical section

    turn = j;

    remainder section
} while (true);
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section and
there exist some processes that wish to enter their critical
section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its critical
section and before that request is granted.
Critical-Section Handling in OS
Two approaches, depending on whether the kernel is preemptive or non-preemptive:
 Preemptive – allows preemption of process when running in
kernel mode
 Non-preemptive – runs until exits kernel mode, blocks, or
voluntarily yields CPU
 Essentially free of race conditions in kernel mode
Peterson’s Solution
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store machine-language instructions
are atomic; that is, cannot be interrupted
 The two processes share two variables:
 int turn;
 Boolean flag[2]

 The variable turn indicates whose turn it is to enter the critical


section
 The flag array is used to indicate if a process is ready to enter the
critical section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;

    critical section

    flag[i] = false;

    remainder section
} while (true);
Peterson’s Solution (Cont.)
 Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters CS only if:
either flag[j] = false or turn = i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Synchronization Hardware
 Many systems provide hardware support for implementing the
critical section code.
 All solutions below based on idea of locking
 Protecting critical regions via locks
 Uniprocessors – could disable interrupts
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable
 Modern machines provide special atomic hardware instructions
 Atomic = non-interruptible
 Either test memory word and set value
 Or swap contents of two memory words
Solution to Critical-section Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
test_and_set Instruction
Definition:
boolean test_and_set (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to TRUE
Solution using test_and_set()
 Shared Boolean variable lock, initialized to FALSE
 Solution:
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);
compare_and_swap Instruction
Definition:
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;

    if (*value == expected)
        *value = new_value;
    return temp;
}
1. Executed atomically
2. Returns the original value of the passed parameter "value"
3. Sets the variable "value" to the value of the passed parameter "new_value",
but only if "value" == "expected". That is, the swap takes place only under
this condition.
Solution using compare_and_swap
 Shared integer “lock” initialized to 0;
 Solution:
do {
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */
/* critical section */
lock = 0;
/* remainder section */
} while (true);
Mutex Locks
OS designers build software tools to solve critical section
problem
Simplest is mutex lock
Protect a critical section by first calling acquire() to obtain the lock and
then calling release() to release it
Boolean variable indicating if lock is available or not
Calls to acquire() and release() must be atomic
Usually implemented via hardware atomic instructions
But this solution requires busy waiting
This lock therefore called a spinlock
acquire() and release()
 acquire() {
while (!available)
; /* busy wait */
available = false;
}
 release() {
available = true;
}
 do {
acquire lock
critical section
release lock
remainder section
} while (true);
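The acquire() and release() pseudocode above is not itself atomic; as a hedged sketch
(assuming a C11 toolchain, which the slides do not specify), a real spinlock can be built
directly on the atomic test-and-set idea:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means the lock is available */

    void acquire(void) {
        /* atomic_flag_test_and_set atomically sets the flag and returns its old value,
           exactly the test_and_set() behaviour described earlier */
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy wait (spin) */
    }

    void release(void) {
        atomic_flag_clear(&lock);                 /* make the lock available again */
    }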
Bakery Algorithm

Critical Section for n processes:


 Before entering its critical section, a process receives a number (like in a bakery).
Holder of the smallest number enters the critical section.
 The numbering scheme here always generates numbers in increasing order of
enumeration;
 If processes Pi and Pj receive the same number,
 if i < j, then Pi is served first;
 else
 Pj is served first (PID assumed unique).
Bakery Algorithm (2)
Choosing a number:
 max(a0, …, an-1) is a number k, such that k ≥ ai for i = 0, …, n – 1
Notation for lexicographical order (ticket #, PID #)
Shared data:
boolean choosing[n];
int number[n];
Data structures are initialized to FALSE and 0, respectively.
Bakery Algorithm for Pi
do {
    choosing[i] = TRUE;
    number[i] = max(number[0], …, number[n – 1]) + 1;
    choosing[i] = FALSE;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))
            ;
    }

    critical section

    number[i] = 0;

    remainder section
} while (TRUE);
Semaphore
 Synchronization tool that provides more sophisticated ways (than Mutex locks) for process to
synchronize their activities.
 Semaphore S – integer variable
 Can only be accessed via two indivisible (atomic) operations
 wait() and signal()
 Originally called P() and V()
 Definition of the wait() operation
wait(S) {
while (S <= 0); // busy wait
S--;
}
 Definition of the signal() operation
signal(S) {
S++;
}
Semaphore Usage
 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0 and 1
Semaphore Implementation
 Must guarantee that no two processes can execute the wait()
and signal() on the same semaphore at the same time
 Thus, the implementation becomes the critical section problem
where the wait and signal code are placed in the critical
section
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections
and therefore this is not a good solution
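The slides stop at the busy-waiting version. The usual remedy, sketched below in the
textbook's pseudocode style (block() and wakeup() stand for kernel operations, so this is
not a complete implementation), is to give each semaphore a waiting queue and suspend
the caller instead of spinning:

    typedef struct {
        int value;
        struct process *list;    /* queue of processes waiting on this semaphore */
    } semaphore;

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();             /* suspend the calling process */
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);           /* move P to the ready queue */
        }
    }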
 Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes

 Starvation – indefinite blocking


 A process may never be removed from the semaphore queue in which it is
suspended
 Priority Inversion – Scheduling problem when lower-priority process
holds a lock needed by higher-priority process
 Solved via priority-inheritance protocol
Classical Problems of Synchronization
 Classical problems used to test newly-proposed synchronization
schemes
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem
 n buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value n
Bounded Buffer Problem (Cont.)
 The structure of the producer process

do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
Bounded Buffer Problem (Cont.)
 The structure of the consumer process

do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time
 Only one single writer can access the shared data at the same time
 Several variations of how readers and writers are considered – all involve
some form of priorities
 Shared Data
 Data set
 Semaphore rw_mutex initialized to 1
 Semaphore mutex initialized to 1
 Integer read_count initialized to 0
Readers-Writers Problem (Cont.)
 The structure of a writer process

do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
Readers-Writers Problem (Cont.)
 The structure of a reader process

do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);
Dining-Philosophers Problem

 Philosophers spend their lives alternating thinking and eating


 Don’t interact with their neighbors, occasionally try to pick up 2 chopsticks (one at a
time) to eat from bowl
 Need both to eat, then release both when done
 In the case of 5 philosophers
 Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem Algorithm
 The structure of Philosopher i:
do {
wait (chopstick[i] );
wait (chopstick[ (i + 1) % 5] );

// eat

signal (chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think

} while (TRUE);
 What is the problem with this algorithm? (If all five philosophers pick up their
left chopstick at the same time, each waits forever for the right one: deadlock.)
Dining-Philosophers Problem Algorithm (Cont.)

 Deadlock handling
 Allow at most 4 philosophers to be sitting simultaneously at
the table.
 Allow a philosopher to pick up the chopsticks only if both are
available (the picking must be done in a critical section).
 Use an asymmetric solution -- an odd-numbered
philosopher picks up first the left chopstick and then the
right chopstick. Even-numbered philosopher picks up first
the right chopstick and then the left chopstick.
Problems with Semaphores
 Incorrect use of semaphore operations:

 signal (mutex) …. wait (mutex)

 wait (mutex) … wait (mutex)

 Omitting wait (mutex) or signal (mutex) (or both)

 Deadlock and starvation are possible.


Monitors
 A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
 Abstract data type; internal variables are only accessible by code within the monitor's procedures
 Only one process may be active within the monitor at a time

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }

    procedure Pn (…) { …… }

    initialization code (…) { … }
}
Schematic view of a Monitor
Condition Variables
 condition x, y;
 Two operations are allowed on a condition variable:
 x.wait() – a process that invokes the operation is suspended
until x.signal()
 x.signal() – resumes one of processes (if any) that invoked
x.wait()
 If no process is suspended in x.wait() on the variable, then x.signal() has no effect
Monitor with Condition Variables
Condition Variables Choices
 If process P invokes x.signal(), and process Q is suspended in
x.wait(), what should happen next?
 Both Q and P cannot execute in parallel. If Q is resumed, then P must wait
 Options include
 Signal and wait – P waits until Q either leaves the monitor or it waits for another
condition
 Signal and continue – Q waits until P either leaves the monitor or it waits for
another condition
 Both have pros and cons – language implementer can decide
 Monitors implemented in Concurrent Pascal compromise
 P executing signal immediately leaves the monitor, Q is resumed
 Implemented in other languages including C#, Java
Solution to Dining Philosophers (Cont.)

 Each philosopher i invokes the operations pickup() and putdown() in the
following sequence:

DiningPhilosophers.pickup(i);

        EAT

DiningPhilosophers.putdown(i);

 No deadlock, but starvation is possible
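The pickup() and putdown() operations used above come from the textbook's
monitor-based solution; a sketch of that monitor, in the same pseudocode style as the
slides, is shown here for reference.

    monitor DiningPhilosophers
    {
        enum { THINKING, HUNGRY, EATING } state[5];
        condition self[5];

        void pickup(int i) {
            state[i] = HUNGRY;
            test(i);
            if (state[i] != EATING)
                self[i].wait();          /* wait until both chopsticks are free */
        }

        void putdown(int i) {
            state[i] = THINKING;
            test((i + 4) % 5);           /* see if the left neighbour can now eat */
            test((i + 1) % 5);           /* see if the right neighbour can now eat */
        }

        void test(int i) {
            if (state[(i + 4) % 5] != EATING &&
                state[i] == HUNGRY &&
                state[(i + 1) % 5] != EATING) {
                state[i] = EATING;
                self[i].signal();
            }
        }

        initialization_code() {
            for (int i = 0; i < 5; i++)
                state[i] = THINKING;
        }
    }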


Resuming Processes within a Monitor
 If several processes queued on condition x, and x.signal()
executed, which should be resumed?
 FCFS frequently not adequate
 conditional-wait construct of the form x.wait(c)
 Where c is priority number
 Process with lowest number (highest priority) is scheduled next
Bakery Algorithm in Process Synchronization

 The Bakery algorithm is one of the simplest known solutions to


the mutual exclusion problem for the general case of N processes.
 Bakery Algorithm is a critical section solution for N processes. The
algorithm preserves the first come first serve property.

 Before entering its critical section, the process receives a number.


Holder of the smallest number enters the critical section.
 If processes Pi and Pj receive the same number
 if i < j
 Pi is served first;
 else
 Pj is served first.
 The numbering scheme always generates numbers in increasing
order of enumeration; i.e., 1, 2, 3, 3, 3, 3, 4, 5

IPC in Unix
 Inter process communication (IPC) refers to the coordination of
activities among cooperating processes. A common example of
this need is managing access to a given system resource.

 Pipes are a simple synchronized way of passing information


between two processes.
 A pipe can be viewed as a special file that can store only a limited
amount of data and uses a FIFO access scheme to retrieve data.

 In a logical view of a pipe, data is written to one end and read


from the other.

 The processes on the ends of a pipe have no easy way to identify


what process is on the other end of the pipe.
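A minimal C example of such a pipe (assuming a POSIX system; the message text is made
up): the parent writes into one end and the child reads from the other.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                         /* fd[0] = read end, fd[1] = write end */
        char buf[64];

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                 /* child: reader */
            close(fd[1]);                  /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            buf[n] = '\0';
            printf("child read: %s\n", buf);
            close(fd[0]);
            return 0;
        }

        /* parent: writer */
        close(fd[0]);                      /* close the unused read end */
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }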

 Multiprocessors and locking

 Scalable locks
 Lock-free coordination
 Lock-free synchronization avoids many serious problems caused by
locks: considerable overhead, concurrency bottlenecks, deadlocks, and
priority inversion in real-time scheduling.

References

 Silberschatz, Gagne, Galvin: Operating System Concepts, 6th Edition
