Chapter 5: Process Synchronization
Operating Systems (0307371)
Dr. Ra’Fat Ahmad AL-msie’Deen
https://sites.google.com/site/ralmsideen/
Silberschatz, Galvin and Gagne ©2013, Operating System Concepts – 9th Edition
Text Book:
This material is based on chapter 5 of “Operating System Concepts, 9th Edition” by Silberschatz et al.
Chapter 5: Process Synchronization
Background
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Semaphores
Objectives
To introduce the critical-section problem, whose solutions can be used
to ensure the consistency of shared data
To present both software and hardware solutions of the critical-section
problem
To examine several classical process-synchronization problems
To explore several tools that are used to solve process
synchronization problems
Background
Synchronization
 Using atomic operations (all occur, or none occur) to ensure correct
cooperation between processes (i.e., eliminate race conditions).
Processes can execute concurrently
 May be interrupted at any time, partially completing execution
Concurrent access to shared data may result in data inconsistency (variation)
Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes
 Illustration of the problem:
 Suppose that we wanted to provide a solution to the consumer-producer
problem that fills all the buffers. We can do so by having an integer
counter that keeps track of the number of full buffers. Initially, counter is
set to 0. It is incremented by the producer after it produces a new buffer
and is decremented by the consumer after it consumes a buffer.
Producer-Consumer Problem
Paradigm for cooperating processes, producer process produces
information that is consumed by a consumer process.
unbounded-buffer places no practical limit on the size of the buffer
bounded-buffer assumes that there is a fixed buffer size
Producer
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer
while (true) {
    while (counter == 0)
        ;   /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Race Condition
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with “counter = 5” initially:
S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}
Race Condition …
Notice that we have arrived at the incorrect state “counter == 4”,
indicating that four buffers are full, when, in fact, five buffers are full. If we
reversed the order of the statements at S4 and S5, we would arrive at
the incorrect state “counter == 6”.
 We would arrive at this incorrect state because we allowed both
processes to manipulate the variable counter concurrently.
A situation like this, where several processes access and manipulate the
same data concurrently and the outcome of the execution depends on the
particular order in which the access takes place, is called a race
condition.
Critical Section Problem
Consider system of n processes {p0, p1, … pn-1}
Each process has critical section segment of code
Process may be changing common variables, updating table,
writing file, etc.
When one process is in its critical section, no other process may be in its
critical section.
Critical section problem is to design protocol to solve this.
Each process must ask permission to enter critical section in entry
section, may follow critical section with exit section, then
remainder section
Critical Section
General structure of process Pi is as follows.
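A minimal sketch of this general structure, written as C-like pseudocode in the style of the other slides; entry_section() and exit_section() are placeholders for whichever protocol is used:

do {
    entry_section();      /* ask permission to enter the critical section */

    /* critical section: access shared variables, tables, files, ... */

    exit_section();       /* announce that the critical section is free */

    /* remainder section */
} while (true);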
Background
Mutual Exclusion
 Mechanisms that ensure that only one person or process is
doing certain things at one time (others are excluded).
 E.g. only one person goes shopping at a time.
Critical Section
 A section of code, or collection of operations, in which only one
process may be executing at a given time.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist
some processes that wish to enter their critical section, then the selection of
the processes that will enter the critical section next cannot be postponed
indefinitely.
 i.e., the critical section must not be left empty while processes are waiting to enter it.
3. Bounded Waiting - A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
Peterson’s Solution
Good algorithmic description of solving the problem
Two-process solution
Assume that the LOAD and STORE instructions are atomic; that is, cannot
be interrupted.
The two processes share two variables:
int turn;
Boolean flag[2]
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
• Provable that:
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
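The algorithm figure for this slide is the standard Peterson construction using the shared turn and flag[2] variables described above; a sketch for process Pi, with j denoting the other process:

do {
    flag[i] = true;              /* Pi is ready to enter */
    turn = j;                    /* politely give the other process the turn */
    while (flag[j] && turn == j)
        ;                        /* busy wait while Pj wants in and it is Pj's turn */

    /* critical section */

    flag[i] = false;             /* Pi is done with the critical section */

    /* remainder section */
} while (true);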
Synchronization Hardware
Many systems provide hardware support for critical section code
All solutions below based on idea of locking
 Protecting critical regions via locks
Uniprocessors – could disable interrupts
The critical-section problem could be solved simply in a single-processor
environment if we could prevent interrupts from occurring while a shared
variable was being modified.
Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable
Modern machines provide special atomic hardware instructions
 Atomic = non-interruptible
Either test memory word and set value
Or swap contents of two memory words
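As an illustration of the “test memory word and set value” idea, a test_and_set()-style instruction and a lock built on it might look like the sketch below; on real hardware the whole function executes as one atomic instruction, and the lock variable is assumed to be initialized to false:

/* definition only: executed atomically by the hardware */
boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = true;
    return rv;
}

/* mutual exclusion with test_and_set() */
do {
    while (test_and_set(&lock))
        ;                 /* spin until the lock is free */

    /* critical section */

    lock = false;         /* release the lock */

    /* remainder section */
} while (true);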
Mutex Locks
OS designers build software tools to solve critical section problem
Simplest is mutex lock
 In fact, the term mutex is short for mutual exclusion
We use the mutex lock to protect critical regions and thus prevent race
conditions. That is, a process must acquire the lock before entering a critical
section; it releases the lock when it exits the critical section.
Protect critical regions with it by first calling acquire() on a lock and then release() when done
Boolean variable indicating if lock is available or not
Calls to acquire() and release() must be atomic
Usually implemented via hardware atomic instructions
But this solution requires busy waiting
This lock therefore called a spinlock
acquire() and release()
 The acquire() function acquires the lock, and the release() function
releases the lock, as illustrated in Figure x.
Figure x. Solution to the critical-section problem using mutex locks.
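A minimal busy-waiting sketch of the two functions, assuming each lock carries a boolean available flag; both calls must be made atomic, e.g. with the hardware instructions of the previous slide:

acquire() {
    while (!available)
        ;                 /* busy wait (spin) until the lock is released */
    available = false;    /* take the lock */
}

release() {
    available = true;     /* make the lock available again */
}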
Semaphore
Synchronization tool that does not require busy waiting
Semaphore S – integer variable
Two standard operations modify S: wait() and signal()
Originally called P() and V()
Less complicated
Can only be accessed via two indivisible (atomic) operations
wait (S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

signal (S) {
    S++;
}
Semaphore Usage
Semaphore as General Synchronization Tool
Counting semaphore – integer value can range over an
unrestricted domain
Binary semaphore – integer value can range only between 0
and 1
Same as a mutex lock
Can implement a counting semaphore S as a binary semaphore
Can solve various synchronization problems
Consider P1 and P2 that require S1 to happen before S2
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;
Semaphore Implementation
Must guarantee that no two processes can execute wait () and signal ()
on the same semaphore at the same time.
Thus, implementation becomes the critical section problem where the
wait and signal code are placed in the critical section.
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and
therefore this is not a good solution.
With busy waiting, CPU cycles are wasted because the waiting process keeps
being dispatched just to spin.
 So, remove the waiting process from the ready queue (RQ) to improve performance.
Semaphore Implementation with no Busy Waiting
With each semaphore there is an associated waiting queue
(WQ)
Each entry in a waiting queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block – place the process invoking the operation on the
appropriate waiting queue.
wakeup – remove one of processes in the waiting queue and
place it in the ready queue (RQ)
Semaphore Implementation with no Busy Waiting …
 If process P finds (S <= 0), it blocks itself: P is moved from the RQ to
the WQ associated with semaphore S  P is blocked.
 block – place the process invoking the operation on the appropriate
waiting queue.
 When S becomes greater than zero, P is moved back to the RQ  P is
woken up.
 wakeup – remove one of the processes in the waiting queue and place it
in the ready queue.
 So, P stays hidden from the scheduler and the CPU until semaphore S
allows it to proceed.
 Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
 Starvation – indefinite blocking
 A process may never be removed from the semaphore queue in
which it is suspended.
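A sketch of wait() and signal() built on the block()/wakeup() operations described above; the semaphore keeps an integer value plus a pointer to its waiting queue, and the list manipulation is left as comments:

typedef struct {
    int value;
    struct process *list;     /* waiting queue (WQ) of blocked processes */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add the calling process P to S->list */
        block();              /* move P from the RQ to the WQ */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);            /* move P back to the ready queue (RQ) */
    }
}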
Classical Problems of Synchronization
Classical problems used to test newly-proposed synchronization
schemes
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
Bounded-Buffer Problem
N buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value N.
 Consumer must wait for a full buffer before it can consume
 Producer must wait for an empty buffer before it can produce
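A sketch of the producer and consumer built on these three semaphores, in the style of the earlier producer/consumer code; the buffer handling itself is left as comments:

/* Producer */
do {
    /* produce an item in next_produced */
    wait(empty);          /* wait for an empty buffer slot */
    wait(mutex);          /* enter the critical section */
    /* add next_produced to the buffer */
    signal(mutex);        /* leave the critical section */
    signal(full);         /* one more full buffer */
} while (true);

/* Consumer */
do {
    wait(full);           /* wait for a full buffer */
    wait(mutex);
    /* remove an item from the buffer into next_consumed */
    signal(mutex);
    signal(empty);        /* one more empty buffer */
    /* consume the item in next_consumed */
} while (true);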
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write.
Problem – allow multiple readers to read at the same time.
 Only one single writer can access the shared data at the same time.
Several variations of how readers and writers are treated – all involve priorities.
Shared Data
Data set
Semaphore rw_mutex initialized to 1
Semaphore mutex initialized to 1
Integer read_count initialized to 0
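A sketch of the writer and reader processes using the shared data above, following the usual first-variation pattern (readers are not kept waiting unless a writer already holds rw_mutex):

/* Writer */
do {
    wait(rw_mutex);           /* exclusive access to the data set */
    /* writing is performed */
    signal(rw_mutex);
} while (true);

/* Reader */
do {
    wait(mutex);              /* protect read_count */
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);       /* first reader locks out writers */
    signal(mutex);

    /* reading is performed */

    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);     /* last reader lets writers back in */
    signal(mutex);
} while (true);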
Readers-Writers Problem Variations
First variation – no reader kept waiting unless writer has permission
to use shared object
Second variation – once a writer is ready, it performs its write as soon as possible (no newly arriving reader may start ahead of it)
Both may have starvation leading to even more variations
Problem is solved on some systems by kernel providing reader-
writer locks
Dining-Philosophers Problem
Shared data
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1
chopstick [5] = {1,1,1,1,1}
Philosophers
Either think or eat
Share a circular table having
 n chairs (for n philosophers)
 n single chopsticks
When hungry
 Try to pick up two chopsticks (left and right)
 Can eat only when she has both chopsticks
When finished eating
 Put down both chopsticks
 Start thinking
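A sketch of the structure of philosopher i using the chopstick semaphores above; note that this naive version can deadlock if all five philosophers grab their left chopstick at the same time:

do {
    wait(chopstick[i]);               /* pick up left chopstick */
    wait(chopstick[(i + 1) % 5]);     /* pick up right chopstick */

    /* eat */

    signal(chopstick[i]);             /* put down left chopstick */
    signal(chopstick[(i + 1) % 5]);   /* put down right chopstick */

    /* think */
} while (true);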
Problems with Semaphores
Incorrect use of semaphore operations:
signal (mutex) …. wait (mutex)
wait (mutex) … wait (mutex)
Omitting wait(mutex) or signal(mutex) (or both)
Deadlock and starvation.
Chapter 5: Process Synchronization
Operating Systems (0307371)
Dr. Ra’Fat Ahmad AL-msie’Deen
https://sites.google.com/site/ralmsideen/
End of Chapter 5