Race Conditions: Process Synchronization
Types of synchronization:
* Barrier
* Lock/semaphore
* Non-blocking synchronization
* Synchronous communication operations
RACE CONDITIONS
* In operating systems, processes that work together share some common storage
(main memory, a file, etc.) that each process can read and write. When two or more processes
read or write shared data and the final result depends on precisely who runs when, the
situation is called a race condition. Concurrently executing threads that share data must
synchronize their operations in order to avoid race conditions on that data: only one thread at
a time should be allowed to examine and update the shared variable (see the sketch after this
list).
* Race conditions also arise inside the operating system itself. For example, if the ready queue
is implemented as a linked list and that list is manipulated while an interrupt is being handled,
then interrupts must be disabled so that a second interrupt cannot be taken before the first one
completes. If interrupts are not disabled, the linked list can become corrupted.
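As a concrete illustration of the point that the final result depends on who runs precisely when, here is a minimal sketch assuming POSIX threads (the file and variable names are illustrative): two threads increment a shared counter without any synchronization, so updates can be lost depending on how the threads interleave.

#include <pthread.h>
#include <stdio.h>

/* Shared variable updated by both threads without synchronization. */
static long counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        counter = counter + 1;   /* read-update-write: not atomic */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but updates can be lost when the threads interleave. */
    printf("counter = %ld\n", counter);
    return 0;
}

Running this program several times typically prints different values below 2000000, which is exactly the "final result depends on who runs when" behaviour described above.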
Critical Section
The key to preventing trouble involving shared storage is to find some way to prohibit more
than one process from reading and writing the shared data at the same time. The part of the
program in which the shared memory is accessed is called the Critical Section. To avoid race
conditions and flawed results, the code belonging to a critical section must be identified in
each thread. A good solution to the critical section problem must satisfy three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes
that wish to enter their critical section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted
The characteristic properties of code that forms a Critical Section are:
* Code that references one or more variables in a "read-update-write" fashion while any of
those variables is possibly being altered by another thread.
* Code that alters one or more variables that are possibly being referenced in a "read-update-
write" fashion by another thread.
* Code that uses a data structure while any part of it is possibly being altered by another thread.
* Code that alters any part of a data structure while it is possibly in use by another thread.
Here, the important point is that when one process is accessing shared modifiable data in its
critical section, no other process is allowed to execute in its critical section. Thus, the
execution of critical sections by the processes is mutually exclusive in time.
MUTUAL EXCLUSION
A way of making sure that if one process is using shared modifiable data, the other
processes are excluded from doing the same thing.
Formally: while one process accesses the shared variable, all other processes desiring to do
so at the same moment should be kept waiting; when that process has finished accessing the
shared variable, one of the waiting processes should be allowed to proceed. In this fashion,
each process accessing the shared data (variables) excludes all others from doing so
simultaneously. This is called Mutual Exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared
modifiable data; when processes are performing operations that do not conflict with one
another, they should be allowed to proceed concurrently.
MUTUAL EXCLUSION CONDITIONS
If we could arrange matters such that no two processes were ever in their critical sections
simultaneously, we could avoid race conditions. Four conditions must hold to have a good
solution for the critical section problem (mutual exclusion):
•No two processes may be simultaneously inside their critical sections.
•No assumptions are made about the relative speeds of processes or the number of CPUs.
•No process running outside its critical section should block other processes.
•No process should have to wait arbitrarily long to enter its critical section.
The mutual exclusion problem is to devise a pre-protocol (or entry protocol) and a post-
protocol (or exit protocol) to keep two or more threads from being in their critical sections at
the same time.
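As a concrete illustration of how an entry and exit protocol wrap a critical section (not one of the proposals examined below), here is a minimal sketch assuming POSIX threads, where pthread_mutex_lock plays the role of the entry protocol and pthread_mutex_unlock the exit protocol.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t region = PTHREAD_MUTEX_INITIALIZER;
static int shared_data = 0;

static void *process(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&region);    /* entry (pre-) protocol           */
        shared_data++;                  /* critical section: shared access */
        pthread_mutex_unlock(&region);  /* exit (post-) protocol           */
        /* noncritical section: work that touches no shared data */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process, NULL);
    pthread_create(&b, NULL, process, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared_data = %d\n", shared_data);  /* always 200000 here */
    return 0;
}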
Tanenbaum examines several proposals for the critical-section (mutual exclusion) problem.
Problem
When one process is updating shared modifiable data in its critical section, no other process
should be allowed to enter its critical section.
Proposal: Lock Variable
In this solution, we consider a single, shared (lock) variable, initially 0. When a process wants
to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and
then enters the critical section. If the lock is already 1, the process just waits until the lock
variable becomes 0. Thus, 0 means that no process is in its critical section, and 1 means hold
your horses: some process is in its critical section.
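Below is a minimal C sketch of this lock-variable proposal; the names enter_critical and leave_critical are illustrative. The gap between testing the lock and setting it is exactly where the flaw described in the next paragraph appears.

/* Sketch of the lock-variable proposal: 0 = free, 1 = some process is in
 * its critical section. This is the flawed version described in the text. */
int lock = 0;                     /* shared, initially 0 */

void enter_critical(void)
{
    while (lock == 1)             /* wait until the lock appears to be free */
        ;                         /* busy wait                              */
    lock = 1;                     /* claim the lock -- another process may
                                     be scheduled between the test above and
                                     this assignment, letting both enter    */
}

void leave_critical(void)
{
    lock = 0;                     /* release the lock on leaving            */
}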
Conclusion
The flaw in this proposal is best explained by example. Suppose process A sees that the
lock is 0. Before it can set the lock to 1, another process B is scheduled, runs, and sets the
lock to 1. When process A runs again, it will also set the lock to 1, and two processes will
be in their critical sections simultaneously.
Proposal: Strict Alternation (Taking Turns)
In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the
critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section.
Process B also finds it to be 0 and sits in a loop, continually testing 'turn' to see when it
becomes 1. Continuously testing a variable while waiting for some value to appear is called
busy waiting.
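Here is a minimal C sketch of this taking-turns proposal for two processes numbered 0 and 1; the names enter_region and leave_region are illustrative.

/* Sketch of strict alternation: 'turn' records whose turn it is to enter
 * the critical section. Process 0 calls enter_region(0)/leave_region(0),
 * process 1 passes 1. */
volatile int turn = 0;            /* shared, initially 0: process 0 goes first */

void enter_region(int self)
{
    while (turn != self)          /* busy waiting until it is our turn */
        ;
}

void leave_region(int self)
{
    turn = 1 - self;              /* hand the turn to the other process */
}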
Conclusion
Taking turns is not a good idea when one of the processes is much slower than the other.
Suppose process 0 finishes its critical section quickly, so both processes are now in their
noncritical sections and turn is 1. If process 0 finishes its noncritical section first and wants
to re-enter its critical section, it must still wait until the slower process 1 has taken its turn,
even though process 1 is nowhere near its critical section. This violates condition 3 above:
a process is being blocked by a process that is not in its critical section.
Basically, what the above solutions do is this: when a process wants to enter its critical
section, it checks whether entry is allowed. If it is not, the process goes into a tight loop and
waits (i.e., it busy waits) until it is allowed to enter. This approach wastes CPU time.
Now let us look at a pair of interprocess communication primitives: sleep and wakeup.
•Sleep
* It is a system call that causes the caller to block, that is, to be suspended until some other
process wakes it up.
•Wakeup
* It is a system call that wakes up (unblocks) the named sleeping process.
As an example of how the sleep and wakeup system calls are used, consider the producer-
consumer problem, also known as the bounded buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer puts information
into the buffer and the consumer takes information out. The bounded-buffer producer and
consumer assume that there is a fixed buffer size, i.e., a finite number of slots is available.
Statement
The goal is to suspend the producer when the buffer is full, to suspend the consumer when
the buffer is empty, and to make sure that only one process at a time manipulates the buffer,
so there are no race conditions or lost updates. Two situations have to be handled:
1. The producer wants to put new data into the buffer, but the buffer is already full.
Solution: the producer goes to sleep, to be awakened when the consumer has removed data.
2. The consumer wants to remove data from the buffer, but the buffer is already empty.
Solution: the consumer goes to sleep until the producer puts some data in the buffer and
wakes the consumer up.
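The sketch below, written in the style of Tanenbaum's producer-consumer pseudocode, shows how sleep and wakeup might be used for this. The routines produce_item, insert_item, remove_item, consume_item and the sleep/wakeup primitives themselves are assumed helpers rather than real library calls; N is the buffer size and count is the shared item counter referred to in the conclusion that follows.

#define N 100                              /* number of slots in the buffer      */
int count = 0;                             /* shared: number of items in buffer  */

/* Assumed helpers, declared but not defined here. */
int  produce_item(void);
void insert_item(int item);
int  remove_item(void);
void consume_item(int item);
void sleep(void);
void wakeup(void (*process)(void));

void consumer(void);                       /* forward declaration */

void producer(void)
{
    while (1) {
        int item = produce_item();         /* generate the next item              */
        if (count == N) sleep();           /* buffer full: go to sleep            */
        insert_item(item);                 /* put the item in the buffer          */
        count = count + 1;                 /* unconstrained access to 'count'     */
        if (count == 1) wakeup(consumer);  /* buffer was empty: wake the consumer */
    }
}

void consumer(void)
{
    while (1) {
        if (count == 0) sleep();           /* buffer empty: go to sleep           */
        int item = remove_item();          /* take an item out of the buffer      */
        count = count - 1;                 /* unconstrained access to 'count'     */
        if (count == N - 1) wakeup(producer); /* buffer was full: wake producer   */
        consume_item(item);
    }
}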
Conclusion
This approach also leads to the same kind of race condition we saw in the earlier approaches.
The race condition can occur because access to the shared counter 'count' is unconstrained.
The essence of the problem is that a wakeup call sent to a process that is not (yet) sleeping is
lost.